How do brains enable organisms to learn from observing and interacting with the world? How do their architectural constraints shape this learning and the structure of emergent neural representations? How can artificial intelligence inform our understanding of biological intelligence, and vice versa?
These are some of the questions I am working on as a Postdoctoral Fellow in the Harvard Vision Sciences Lab, in the Psychology Department and Kempner Institute for Natural and Artificial Intelligence. I am primarily advised by Talia Konkle, and collaborate with George Alvarez and others in the Vision Sciences Lab, along with Hanspeter Pfister and others in the Visual Computing Group. Currently, I am focused on developing computational vision models that learn to see more like humans, and can provide greater insights into the neural computations underlying high-level vision. Here, as in my prior work, I am using the modern AI toolkit to implement classic and new ideas from neuroscience and cognitive science, in order to make theoretical scientific advances.
I received my Ph.D. in Neural Computation from Carnegie Mellon University in December 2023, where I was advised by David C. Plaut and Marlene Behrmann. My Ph.D. work involved developing computational models of familiar and unfamiliar face recognition and of cortical organization for visual domains, as well as empirical investigations into the hemispheric organization of high-level visual cortex.
Much of my research continues to build upon the topographic models I developed in my Ph.D., for example extending them to account for the influence of retinotopy and language on the global organization of human ventral temporal cortex, and modeling the spatial organization of language processing through topographic Transformer language models (see also our newer work: topoLM). I see this work as a set of critical first steps toward the development of large-scale functional models of the human brain, a challenging task that will unfold over the coming decades.
Practically, my work aims to inform the design of efficient, sustainable AI: impressive as it is, current AI is orders of magnitude less energy-efficient than the human brain, which runs on ~20 watts. The spatial embedding of neural computations appears to be a key motif contributing to the brain's remarkable energetic efficiency.
Some pretty but uninformative pictures of me